Underwater Visual-Inertial-Acoustic-Depth SLAM with DVL Preintegration for Degraded Environments

Ding, Shuoshuo, Zhang, Tiedong, Jiang, Dapeng, Lei, Ming

arXiv.org Artificial Intelligence

Abstract--Visual degradation caused by limited visibility, insufficient lighting, and feature scarcity in underwater environments presents significant challenges to visual-inertial simultaneous localization and mapping (SLAM) systems. To address this, we present an underwater visual-inertial-acoustic-depth SLAM system with DVL preintegration. The key innovation lies in the tight integration of four distinct sensor modalities (camera, IMU, Doppler velocity log, and depth sensor) to ensure reliable operation, even under degraded visual conditions. To mitigate DVL drift and improve measurement efficiency, we propose a novel velocity-bias-based DVL preintegration strategy. At the frontend, hybrid tracking strategies and acoustic-inertial-depth joint optimization enhance system stability. Additionally, multi-source hybrid residuals are incorporated into a graph optimization framework. Extensive quantitative and qualitative analyses of the proposed system are conducted in both simulated and real-world underwater scenarios. The results demonstrate that our approach outperforms current state-of-the-art stereo visual-inertial SLAM systems in both stability and localization accuracy, exhibiting exceptional robustness, particularly in visually challenging environments.

Human activities in the fields of ocean engineering and marine science are increasing steadily, encompassing scientific expeditions to study underwater hydrothermal vents and archaeological sites, inspections and maintenance of subsea pipelines and reservoirs, and salvage operations for wrecked aircraft and vessels.

Shuoshuo Ding, Tiedong Zhang, and Dapeng Jiang are with the School of Ocean Engineering and Technology & Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai), Sun Yat-sen University, Zhuhai 519082, China, with the Guangdong Provincial Key Laboratory of Information Technology for Deep Water Acoustics, Zhuhai 519082, China, and also with the Key Laboratory of Comprehensive Observation of Polar Environment (Sun Yat-sen University), Ministry of Education, Zhuhai 519082, China (e-mail: dingshsh5@mail2.sysu.edu.cn).
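The velocity-bias-based DVL preintegration the abstract mentions can be illustrated with a simplified sketch: between two keyframes, body-frame velocity samples are rotated into a reference frame, corrected by a velocity bias, and accumulated into a relative position delta. The function below is an assumption-laden toy (known rotations, constant bias, no noise terms), not the paper's actual formulation.

```python
import numpy as np

def dvl_preintegrate(rotations, velocities, bias, dt):
    """Accumulate a relative-position delta from DVL body-frame velocity
    samples between two keyframes (illustrative sketch only).

    rotations  : list of 3x3 body-to-reference rotation matrices
    velocities : list of 3-vector DVL velocity measurements (body frame)
    bias       : 3-vector velocity-bias estimate (assumed constant here)
    dt         : sampling interval in seconds
    """
    delta_p = np.zeros(3)
    for R, v in zip(rotations, velocities):
        delta_p += R @ (np.asarray(v) - bias) * dt
    return delta_p

# Straight-line motion at 1 m/s along x with a 0.1 m/s bias, 10 samples at 10 Hz
Rs = [np.eye(3)] * 10
vs = [np.array([1.1, 0.0, 0.0])] * 10
print(dvl_preintegrate(Rs, vs, np.array([0.1, 0.0, 0.0]), 0.1))  # ~[1. 0. 0.]
```

Keeping the bias as an explicit variable is what allows a graph optimizer to re-linearize the preintegrated term when the bias estimate changes, rather than re-integrating raw measurements.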


Efficient Force and Stiffness Prediction in Robotic Produce Handling with a Piezoresistive Pressure Sensor

Fairchild, Preston, Chen, Claudia, Tan, Xiaobo

arXiv.org Artificial Intelligence

Abstract: Properly handling delicate produce with robotic manipulators is a major part of the future role of automation in agricultural harvesting and processing. Grasping with the correct amount of force is crucial not only for ensuring a proper grip on the object, but also to avoid damaging or bruising the product. In this work, a flexible pressure sensor that is both low-cost and easy to fabricate is integrated with robotic grippers for working with produce of varying shapes, sizes, and stiffnesses. The sensor is successfully integrated with both a rigid robotic gripper and a pneumatically actuated soft finger. Furthermore, an algorithm is proposed for accelerated estimation of the steady-state value of the sensor output based on the transient response data, to enable real-time applications. The sensor is shown to be effective in incorporating feedback to correctly grasp objects of unknown sizes and stiffnesses. At the same time, the sensor provides estimates for these values, which can be utilized for identification of qualities such as ripeness levels and bruising. It is also shown to provide force feedback for objects of variable stiffnesses. This enables future use not only for produce identification, but also for tasks such as quality control and selective distribution based on ripeness levels.

Keywords: Robotics, sensing, produce handling, grasping

Highlights:
- Low-cost and easy-to-fabricate sensor for easy implementation with a variety of robotic grippers
- Fast estimation of settled resistance using an exponential decay curve fit
- Measurements of grasping force and stiffness of a held object
- Various produce-handling features such as ripeness monitoring, bruising detection, and size estimation

1. Introduction: The use of robotic end-effectors for securely grasping objects is a pivotal component in manipulation tasks.
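The accelerated steady-state estimation can be illustrated with the standard first-order model y(t) = y_ss + A*exp(-t/tau) that an exponential decay fit assumes: for three equally spaced samples on such a curve, the settled value has the closed form y_ss = (y0*y2 - y1^2) / (y0 + y2 - 2*y1). This is a generic sketch of the curve-fitting idea, not the paper's exact algorithm.

```python
def steady_state_estimate(y0, y1, y2):
    """Closed-form settled value for three equally spaced samples of
    y(t) = y_ss + A * exp(-t/tau).  With r = exp(-dt/tau):
    y0*y2 - y1**2 = y_ss * A * (1 - r)**2 and
    y0 + y2 - 2*y1 = A * (1 - r)**2, so their ratio is y_ss."""
    denom = y0 + y2 - 2.0 * y1
    if abs(denom) < 1e-12:
        raise ValueError("samples do not lie on a decaying exponential")
    return (y0 * y2 - y1 ** 2) / denom

# Synthetic transient with y_ss = 5, A = 3, decay ratio r = 0.5 per sample
samples = [5 + 3 * 0.5 ** k for k in range(3)]   # [8.0, 6.5, 5.75]
print(steady_state_estimate(*samples))            # 5.0
```

In practice one would fit over many samples (least squares on the model) to average out noise; the three-point form just shows why a short transient already determines the settled resistance.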


M3D-skin: Multi-material 3D-printed Tactile Sensor with Hierarchical Infill Structures for Pressure Sensing

Yoshimura, Shunnosuke, Kawaharazuka, Kento, Okada, Kei

arXiv.org Artificial Intelligence

Tactile sensors have a wide range of applications, from utilization in robotic grippers to human motion measurement. If tactile sensors could be fabricated and integrated more easily, their applicability would further expand. In this study, we propose a tactile sensor-M3D-skin-that can be easily fabricated with high versatility by leveraging the infill patterns of a multi-material fused deposition modeling (FDM) 3D printer as the sensing principle. This method employs conductive and non-conductive flexible filaments to create a hierarchical structure with a specific infill pattern. The flexible hierarchical structure deforms under pressure, leading to a change in electrical resistance, enabling the acquisition of tactile information. We measure the changes in characteristics of the proposed tactile sensor caused by modifications to the hierarchical structure. Additionally, we demonstrate the fabrication and use of a multi-tile sensor. Furthermore, as applications, we implement motion pattern measurement on the sole of a foot, integration with a robotic hand, and tactile-based robotic operations. Through these experiments, we validate the effectiveness of the proposed tactile sensor.



A control scheme for collaborative object transportation between a human and a quadruped robot using the MIGHTY suction cup

Plotas, Konstantinos, Papadakis, Emmanouil, Drosakis, Drosakis, Trahanias, Panos, Papageorgiou, Dimitrios

arXiv.org Artificial Intelligence

Please find the citation info @ Zenodo, as the proceedings of ICRA are no longer sent to IEEE Xplore. This is a pre-print version of the paper presented at the IEEE International Conference on Robotics and Automation 2025 (ICRA), Atlanta, US. Abstract -- In this work, a control scheme for human-robot collaborative object transportation is proposed, considering a quadruped robot equipped with the MIGHTY suction cup that serves both as a gripper for holding the object and as a force/torque sensor. The proposed control scheme is based on the notion of admittance control and incorporates a variable damping term aimed at increasing the controllability for the human while, at the same time, decreasing her/his effort. Furthermore, to ensure that the object is not detached from the suction cup during the collaboration, an additional control signal is proposed, which is based on a barrier artificial potential. The proposed control scheme is proven to be passive, and its performance is demonstrated through experimental evaluations conducted using the Unitree Go1 robot equipped with the MIGHTY suction cup.
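The variable-damping admittance idea can be sketched in one dimension: the robot renders the dynamics m*dv/dt + d(v)*v = f_h, where f_h is the human's measured force and the damping d(v) is high near rest (precise positioning) and low at speed (less human effort). All gains below are illustrative assumptions, not the paper's values, and the damping schedule is one common choice rather than the authors'.

```python
import numpy as np

def admittance_step(v, f_h, m=5.0, d_min=2.0, d_max=20.0, k=4.0, dt=0.01):
    """One Euler step of a 1-D variable-damping admittance law
    m*dv/dt + d(v)*v = f_h.  Damping decays from d_max toward d_min as
    speed grows, so fast intentional motion meets less resistance.
    Gains (m, d_min, d_max, k) are illustrative only."""
    d = d_min + (d_max - d_min) * np.exp(-k * abs(v))  # stiff near rest
    return v + dt * (f_h - d * v) / m

# Constant 10 N push: velocity settles where f_h = d(v) * v
v = 0.0
for _ in range(5000):
    v = admittance_step(v, 10.0)
print(round(v, 3))  # ~5.0, since d(v) ~ d_min at that speed
```

Passivity of such a scheme hinges on the damping staying positive; the exponential schedule above never drops below d_min, which is the property a formal proof would exploit.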


Aucamp: An Underwater Camera-Based Multi-Robot Platform with Low-Cost, Distributed, and Robust Localization

Xu, Jisheng, Lin, Ding, Fong, Pangkit, Fang, Chongrong, Duan, Xiaoming, He, Jianping

arXiv.org Artificial Intelligence

This paper introduces an underwater multi-robot platform, named Aucamp, characterized by cost-effective monocular-camera-based sensing, a distributed protocol, and robust orientation control for localization. We utilize the image clarity feature to measure distance, present the monocular imaging model, and estimate the position of the target object. We achieve global positioning in our platform by designing a distributed update protocol. The distributed algorithm enables the perception process to cover a broader range simultaneously and greatly improves the accuracy and robustness of the positioning. Moreover, we derive the explicit dynamics model of the robot in our platform, based on which we propose a robust orientation control framework. The control system ensures that each robot maintains a balanced posture, thereby ensuring the stability of the localization system. The platform can swiftly recover from a forced unstable state to a stable horizontal posture. Additionally, we conduct extensive experiments and application-scenario tests to evaluate the performance of our platform. The proposed platform may provide support for extensive marine exploration by underwater sensor networks.
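The monocular position estimation mentioned above can be sketched with the standard pinhole model: depth follows from the apparent size of a target of known physical width, and the lateral offsets follow from the pixel coordinates and camera intrinsics. This is a generic textbook model standing in for the platform's pipeline (which additionally uses a clarity-based distance cue); all numbers are hypothetical.

```python
def monocular_position(u, v, w_pixels, W_real, fx, fy, cx, cy):
    """Pinhole-model position of a target of known physical width W_real
    whose image is w_pixels wide and centred at pixel (u, v).
    (fx, fy, cx, cy) are camera intrinsics from calibration.
    Generic monocular model, not the platform's exact method."""
    Z = fx * W_real / w_pixels          # depth from apparent size
    X = Z * (u - cx) / fx               # lateral offset
    Y = Z * (v - cy) / fy               # vertical offset
    return X, Y, Z

# A 0.10 m-wide marker imaged 50 px wide at the image centre, fx = 500 px
print(monocular_position(320, 240, 50, 0.10, 500, 500, 320, 240))
```

With a single camera the depth scale comes entirely from the known target size, which is why the distributed protocol (fusing many such estimates across robots) matters for accuracy.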


P2P-Insole: Human Pose Estimation Using Foot Pressure Distribution and Motion Sensors

Watanabe, Atsuya, Aisuwarya, Ratna, Jing, Lei

arXiv.org Artificial Intelligence

This work presents P2P-Insole, a low-cost approach for estimating and visualizing 3D human skeletal data using insole-type sensors integrated with IMUs. Each insole, fabricated with e-textile garment techniques, costs under USD 1, making it significantly cheaper than commercial alternatives and ideal for large-scale production. Our approach uses foot pressure distribution, acceleration, and rotation data to overcome limitations, providing a lightweight, minimally intrusive, and privacy-aware solution. The system employs a Transformer model for efficient temporal feature extraction, enriched by first and second derivatives in the input stream. Including multimodal information, such as accelerometers and rotational measurements, improves the accuracy of complex motion pattern recognition. These facts are demonstrated experimentally, while error metrics show the robustness of the approach in various posture estimation tasks. This work could be the foundation for a low-cost, practical application in rehabilitation, injury prevention, and health monitoring while enabling further development through sensor optimization and expanded datasets.
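The derivative enrichment of the Transformer's input stream can be sketched as stacking each multichannel frame with its first and second time derivatives along the feature axis. The shapes and sampling interval below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def enrich_with_derivatives(x, dt=0.01):
    """Augment a multichannel time series with its first and second time
    derivatives along the feature axis, as input enrichment for a
    temporal model.  x has shape (T, C); the result is (T, 3*C).
    Generic sketch of the idea, not the paper's exact preprocessing."""
    d1 = np.gradient(x, dt, axis=0)     # velocity-like features
    d2 = np.gradient(d1, dt, axis=0)    # acceleration-like features
    return np.concatenate([x, d1, d2], axis=1)

# e.g. 100 frames of 16 pressure channels -> 48 features per frame
frames = np.random.rand(100, 16)
print(enrich_with_derivatives(frames).shape)  # (100, 48)
```

Feeding explicit derivatives spares the model from having to learn finite differencing itself, which tends to help when training data is limited.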


Learning-Based Leader Localization for Underwater Vehicles With Optical-Acoustic-Pressure Sensor Fusion

Yang, Mingyang, Sha, Zeyu, Zhang, Feitian

arXiv.org Artificial Intelligence

Underwater vehicles have emerged as a critical technology for exploring and monitoring aquatic environments. The deployment of multi-vehicle systems has gained substantial interest due to their capability to perform collaborative tasks with improved efficiency. However, achieving precise localization of a leader underwater vehicle within a multi-vehicle configuration remains a significant challenge, particularly in dynamic and complex underwater conditions. To address this issue, this paper presents a novel tri-modal sensor fusion neural network approach that integrates optical, acoustic, and pressure sensors to localize the leader vehicle. The proposed method leverages the unique strengths of each sensor modality to improve localization accuracy and robustness. Specifically, optical sensors provide high-resolution imaging for precise relative positioning, acoustic sensors enable long-range detection and ranging, and pressure sensors offer environmental context awareness. The fusion of these sensor modalities is implemented using a deep learning architecture designed to extract and combine complementary features from raw sensor data. The effectiveness of the proposed method is validated through a custom-designed testing platform. Extensive data collection and experimental evaluations demonstrate that the tri-modal approach significantly improves the accuracy and robustness of leader localization, outperforming both single-modal and dual-modal methods.


Leader-follower formation enabled by pressure sensing in free-swimming undulatory robotic fish

Panta, Kundan, Deng, Hankun, DeLattre, Micah, Cheng, Bo

arXiv.org Artificial Intelligence

Fish use their lateral lines to sense flows and pressure gradients, enabling them to detect nearby objects and organisms. Towards replicating this capability, we demonstrated successful leader-follower formation swimming using flow pressure sensing in our undulatory robotic fish ($\mu$Bot/MUBot). The follower $\mu$Bot is equipped at its head with bilateral pressure sensors to detect signals excited by both its own and the leader's movements. First, using experiments with static formations between an undulating leader and a stationary follower, we determined the formation that resulted in strong pressure variations measured by the follower. This formation was then selected as the desired formation in free swimming for obtaining an expert policy. Next, a long short-term memory neural network was used as the control policy that maps the pressure signals along with the robot motor commands and the Euler angles (measured by the onboard IMU) to the steering command. The policy was trained to imitate the expert policy using behavior cloning and Dataset Aggregation (DAgger). The results show that with merely two bilateral pressure sensors and less than one hour of training data, the follower effectively tracked the leader within distances of up to 200 mm (= 1 body length) while swimming at speeds of 155 mm/s (= 0.8 body lengths/s). This work highlights the potential of fish-inspired robots to effectively navigate fluid environments and achieve formation swimming through the use of flow pressure feedback.
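The DAgger training loop described above can be sketched in miniature: roll out the current policy, have the expert relabel the states actually visited, aggregate those labels into the dataset, and refit. A least-squares linear policy and a synthetic pressure-difference expert stand in for the paper's LSTM and expert policy; both are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def expert(obs):
    """Stand-in expert: steering proportional to the bilateral pressure
    difference (illustrative, not the paper's expert policy)."""
    return 2.0 * (obs[:, 0] - obs[:, 1])

def fit_policy(X, y):
    """Least-squares linear policy standing in for the LSTM."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# DAgger: aggregate expert labels on learner-visited states, then refit
X = rng.normal(size=(50, 2))                  # initial expert rollouts
y = expert(X)
w = fit_policy(X, y)
for _ in range(5):
    visited = rng.normal(size=(50, 2))        # states the learner visits
    X = np.vstack([X, visited])               # aggregate the dataset
    y = np.concatenate([y, expert(visited)])  # expert relabels them
    w = fit_policy(X, y)
print(np.round(w, 3))  # recovers the expert's weights, ~[ 2. -2.]
```

The point of the aggregation step is to cover the learner's own state distribution, which plain behavior cloning misses; this is what lets under an hour of data suffice in the paper's setting.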